Computed tomography (CT) is important in clinical practice for its powerful ability to provide patients' anatomical information without any invasive inspection, but its potential radiation risk raises concerns. Deep learning-based methods are considered promising for CT reconstruction, but these network models are usually trained with measurement data obtained under a specific scan protocol and require centralized collection of large amounts of data, which leads to serious data domain shift and raises privacy concerns. To relieve these problems, in this paper we propose a hypernetwork-based federated learning method for personalized CT imaging, dubbed HyperFed. The basic assumption of HyperFed is that the optimization problem of each institution can be divided into two parts: a local data adaptation problem and a global CT imaging problem, implemented by an institution-specific hypernetwork and a globally shared imaging network, respectively. The globally shared imaging network aims to learn stable and effective common features from different institutions. The institution-specific hypernetwork is carefully designed to output hyperparameters that modulate the globally shared imaging network for personalized local CT reconstruction. Experiments show that HyperFed achieves competitive performance in CT reconstruction compared with several state-of-the-art methods. It is a promising direction for improving CT imaging quality and meeting the personalized needs of different institutions or scanners without sharing private data. The code will be released at https://github.com/zi-yuanyang/hyperfed
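As a rough illustration of the hypernetwork idea, the numpy sketch below (all dimensions, names, and the FiLM-style modulation are hypothetical assumptions, not the paper's actual architecture) has an institution-specific hypernetwork map an institution embedding to per-feature scale/shift hyperparameters that condition one layer of a globally shared imaging network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Globally shared imaging layer: one dense layer whose features are
# modulated by institution-specific scale/shift parameters.
W_shared = rng.standard_normal((16, 16)) * 0.1

# Hypothetical hypernetwork: maps an institution embedding (dim 4)
# to per-feature scale gamma and shift beta for the shared layer.
W_hyper = rng.standard_normal((4, 32)) * 0.1

def hypernetwork(inst_embedding):
    """Produce modulation hyperparameters for one institution."""
    out = inst_embedding @ W_hyper
    gamma, beta = out[:16], out[16:]
    return 1.0 + gamma, beta  # scale centred at 1

def shared_imaging_layer(x, gamma, beta):
    """Globally shared layer, conditioned for local reconstruction."""
    h = np.maximum(x @ W_shared, 0.0)  # ReLU features
    return gamma * h + beta            # institution-specific modulation

x = rng.standard_normal(16)            # toy measurement features
for inst in range(3):                  # three institutions, one shared net
    emb = np.eye(4)[inst]
    g, b = hypernetwork(emb)
    y = shared_imaging_layer(x, g, b)
    print(inst, y.shape)
```

The point of the split is that only `W_hyper`-style institution parameters need to stay local, while `W_shared` can be federated across institutions.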
Translated by Google Translate
Large-scale datasets play a vital role in computer vision. But current datasets are annotated blindly, without differentiation among samples, making data collection inefficient and unscalable. The open question is how to build a mega-scale dataset actively. Although advanced active learning algorithms might be the answer, we experimentally found that they perform poorly in realistic annotation scenarios where out-of-distribution data is extensive. This work thus proposes a novel active learning framework for realistic dataset annotation. Equipped with this framework, we build a high-quality vision dataset, Bamboo, which consists of 69M image classification annotations with 119K categories and 28M object bounding box annotations with 809 categories. We organize these categories by a hierarchical taxonomy integrated from several knowledge bases. The classification annotations are four times larger than ImageNet22K, and the detection annotations are three times larger than Object365. Compared with ImageNet22K and Objects365, models pre-trained on Bamboo achieve superior performance on various downstream tasks (6.2% gains on classification and 2.1% gains on detection). We believe our active learning framework and Bamboo are essential for future work.
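The paper's annotation framework is more elaborate than this, but one basic active-learning ingredient it builds on can be sketched as a plain uncertainty-based acquisition step (a generic sketch, not Bamboo's actual criterion):

```python
import numpy as np

def entropy_acquisition(probs, k):
    """Rank unlabeled samples by predictive entropy and pick the top k.

    probs: (n_samples, n_classes) softmax outputs of the current model.
    Returns indices of the k most uncertain samples to annotate next.
    """
    eps = 1e-12
    entropy = -np.sum(probs * np.log(probs + eps), axis=1)
    return np.argsort(entropy)[::-1][:k]

# Toy pool: sample 0 is confident, sample 1 is maximally uncertain.
pool = np.array([
    [0.98, 0.01, 0.01],
    [0.34, 0.33, 0.33],
    [0.70, 0.20, 0.10],
])
print(entropy_acquisition(pool, k=1))  # picks the most uncertain sample
```

The paper's experimental finding is precisely that such textbook criteria degrade when the unlabeled pool contains heavy out-of-distribution data, which motivates their framework.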
Existing 3D scene stylization methods employ an arbitrary style reference to transfer textures and colors as styles without establishing meaningful semantic correspondences. We present Reference-Based Non-Photorealistic Radiance Fields, i.e., Ref-NPR. It is a controllable scene stylization method utilizing radiance fields to stylize a 3D scene, with a single stylized 2D view taken as reference. To achieve decent results, we propose a ray registration process based on the stylized reference view to obtain pseudo-ray supervision in novel views, and exploit the semantic correspondence in content images to fill occluded regions with perceptually similar styles. Combining these operations, Ref-NPR generates non-photorealistic and continuous novel view sequences with a single reference while obtaining reasonable stylization in occluded regions. Experiments show that Ref-NPR significantly outperforms other scene and video stylization methods in terms of both visual quality and semantic correspondence. Code and data will be made publicly available.
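A heavily simplified sketch of the ray-registration idea (all names and the nearest-point matching are illustrative assumptions, not Ref-NPR's actual implementation): a novel-view ray that hits approximately the same 3D surface point as a reference-view ray inherits the stylized reference color as pseudo supervision, while unmatched (occluded) rays get no target and would fall back to semantic correspondence:

```python
import numpy as np

def pseudo_ray_supervision(novel_points, ref_points, ref_colors, tol=1e-6):
    """Toy ray registration between a novel view and the reference view.

    novel_points: (n, 3) surface points hit by novel-view rays.
    ref_points:   (m, 3) surface points hit by reference-view rays.
    ref_colors:   (m, 3) stylized colors of the reference rays.
    Returns per-novel-ray color targets; NaN rows mark occluded rays.
    """
    targets = np.full((len(novel_points), 3), np.nan)
    for i, p in enumerate(novel_points):
        d2 = np.sum((ref_points - p) ** 2, axis=1)
        j = np.argmin(d2)
        if d2[j] < tol:
            targets[i] = ref_colors[j]
    return targets

ref_points = np.array([[0., 0., 1.], [1., 0., 1.]])
ref_colors = np.array([[1., 0., 0.], [0., 1., 0.]])    # stylized colors
novel_points = np.array([[0., 0., 1.], [5., 5., 5.]])  # second ray unmatched
t = pseudo_ray_supervision(novel_points, ref_points, ref_colors)
print(t[0], np.isnan(t[1]).all())
```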
Registration of point clouds of forest environments is an essential prerequisite for LiDAR applications in precision forestry. State-of-the-art methods for forest point cloud registration require the extraction of individual tree attributes, and they face an efficiency bottleneck when dealing with real-world forest point clouds containing dense trees. We propose an automatic, robust, and efficient method for the registration of forest point clouds. Our method first locates tree stems from the raw point clouds and then matches the stems based on their relative spatial relationship to determine the registration transformation. Compared with existing methods, our algorithm requires no extra individual tree attributes and has linear complexity in the number of trees in the environment, allowing it to align point clouds of large forest environments. Extensive experiments show that our method outperforms state-of-the-art methods in terms of registration accuracy and robustness, and significantly outperforms existing techniques in terms of efficiency. In addition, we introduce a new benchmark dataset that complements the very few existing open datasets for the development and evaluation of registration methods for forest point clouds.
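Once stem correspondences between two scans are established, the rigid transformation has a standard closed-form solution. The numpy sketch below assumes matched 2D stem positions are already given (the stem detection and matching steps are the paper's contribution and are not reproduced here) and recovers rotation and translation with the Kabsch algorithm:

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Kabsch: least-squares R, t such that dst ~= src @ R.T + t.

    src, dst: (n, 2) matched stem positions from two scans.
    """
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                       # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = dst.mean(axis=0) - src.mean(axis=0) @ R.T
    return R, t

# Toy stems: the second scan is the first rotated 30 degrees and shifted.
rng = np.random.default_rng(1)
stems_a = rng.uniform(0, 50, size=(8, 2))     # stem positions, scan A
theta = np.radians(30)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
stems_b = stems_a @ R_true.T + np.array([5.0, -3.0])

R, t = estimate_rigid_transform(stems_a, stems_b)
print(np.allclose(stems_a @ R.T + t, stems_b))  # True
```

Since each scan contributes only a handful of stem positions rather than millions of raw points, this final alignment step is cheap; the linear-complexity claim in the abstract concerns the stem localization and matching stages.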
Nowadays, as more and more systems achieve good performance on traditional voice conversion (VC) tasks, attention is gradually turning to VC tasks under extreme conditions. In this paper, we propose a novel method for zero-shot voice conversion. We aim to obtain intermediate representations for speaker-content disentanglement of speech, so as to better remove speaker information and extract pure content information. Accordingly, our proposed framework contains a module that removes speaker information from the acoustic features of the source speaker. Moreover, speaker information control is added to our system to maintain voice cloning performance. The proposed system is evaluated with subjective and objective metrics. The results show that our proposed system significantly reduces the trade-off problem in zero-shot voice conversion, while also exhibiting high spoofing power against speaker verification systems.
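One common mechanism for stripping speaker statistics from acoustic features, shown below as a numpy sketch, is per-utterance instance normalization over the time axis (an illustrative assumption; the abstract does not specify the paper's actual speaker-removal module):

```python
import numpy as np

def remove_speaker_stats(features, eps=1e-8):
    """Per-utterance instance normalization over time.

    features: (n_frames, n_mels) acoustic features. Subtracting the
    per-channel mean and dividing by the per-channel std removes
    coarse utterance-level statistics that correlate with speaker
    identity, keeping the time-varying content.
    """
    mu = features.mean(axis=0, keepdims=True)
    sigma = features.std(axis=0, keepdims=True)
    return (features - mu) / (sigma + eps)

rng = np.random.default_rng(0)
# Two "speakers": same content pattern, different global offset/scale.
content = rng.standard_normal((100, 80))
spk_a = content * 1.5 + 2.0
spk_b = content * 0.7 - 1.0
print(np.allclose(remove_speaker_stats(spk_a),
                  remove_speaker_stats(spk_b), atol=1e-5))  # True
```

Under this toy model, the two utterances normalize to (almost) the same representation, which is exactly the disentanglement behavior the abstract describes.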
In this chapter, we review and discuss the transformation of AI technology in HCI/UX work and assess how AI technology will change how we do the work. We first discuss how AI can be used to enhance the result of user research and design evaluation. We then discuss how AI technology can be used to enhance HCI/UX design. Finally, we discuss how AI-enabled capabilities can improve UX when users interact with computing systems, applications, and services.
An increasing number of public datasets have shown a marked clinical impact on assessing anatomical structures. However, each of the datasets is small, partially labeled, and rarely investigates severe tumor subjects. Moreover, current models are limited to segmenting specific organs/tumors, which can not be extended to novel domains and classes. To tackle these limitations, we introduce embedding learned from Contrastive Language-Image Pre-training (CLIP) to segmentation models, dubbed the CLIP-Driven Universal Model. The Universal Model can better segment 25 organs and 6 types of tumors by exploiting the semantic relationship between abdominal structures. The model is developed from an assembly of 14 datasets with 3,410 CT scans and evaluated on 6,162 external CT scans from 3 datasets. We rank first on the public leaderboard of the Medical Segmentation Decathlon (MSD) and achieve the state-of-the-art results on Beyond The Cranial Vault (BTCV). Compared with dataset-specific models, the Universal Model is computationally more efficient (6x faster), generalizes better to CT scans from varying sites, and shows stronger transfer learning performance on novel tasks. The design of CLIP embedding enables the Universal Model to be easily extended to new classes without catastrophically forgetting the previously learned classes.
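A minimal sketch of how a text embedding can drive a segmentation head (numpy; dimensions, the dot-product head, and the stand-in random embeddings are all assumptions rather than the Universal Model's exact design): per-voxel image features are scored against one text embedding per class, yielding an independent binary mask per organ/tumor, which is what lets new classes be added by supplying a new embedding:

```python
import numpy as np

rng = np.random.default_rng(0)

def clip_driven_masks(voxel_features, class_embeddings):
    """Per-class mask probabilities from per-voxel image features and
    (stand-in) CLIP text embeddings of the class names.

    voxel_features:   (n_voxels, d) features from the image encoder.
    class_embeddings: (n_classes, d) text embeddings, one per class.
    Returns sigmoid probabilities of shape (n_voxels, n_classes):
    one binary mask per class rather than a single softmax over a
    fixed label set, so novel classes only need a new embedding.
    """
    logits = voxel_features @ class_embeddings.T
    return 1.0 / (1.0 + np.exp(-logits))

d = 32
voxels = rng.standard_normal((1000, d))   # toy CT feature map, flattened
classes = rng.standard_normal((31, d))    # 25 organs + 6 tumor types
masks = clip_driven_masks(voxels, classes)
print(masks.shape)  # (1000, 31)
```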
Recent advances in self-supervised learning (SSL) in computer vision are primarily comparative, whose goal is to preserve invariant and discriminative semantics in latent representations by comparing siamese image views. However, the preserved high-level semantics do not contain enough local information, which is vital in medical image analysis (e.g., image-based diagnosis and tumor segmentation). To mitigate the locality problem of comparative SSL, we propose to incorporate the task of pixel restoration for explicitly encoding more pixel-level information into high-level semantics. We also address the preservation of scale information, which is powerful in aiding image understanding but has not drawn much attention in SSL. The resulting framework can be formulated as a multi-task optimization problem on the feature pyramid. Specifically, we conduct multi-scale pixel restoration and siamese feature comparison in the pyramid. In addition, we propose non-skip U-Net to build the feature pyramid and develop sub-crop to replace multi-crop in 3D medical imaging. The proposed unified SSL framework (PCRLv2) surpasses its self-supervised counterparts on various tasks, including brain tumor segmentation (BraTS 2018), chest pathology identification (ChestX-ray, CheXpert), pulmonary nodule detection (LUNA), and abdominal organ segmentation (LiTS), sometimes outperforming them by large margins with limited annotations.
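The multi-task formulation can be caricatured as a two-term objective (a toy numpy sketch under assumed losses: MSE for restoration, cosine distance for the siamese comparison; PCRLv2 applies these at multiple pyramid scales, which is omitted here):

```python
import numpy as np

def pcrl_style_loss(restored, target, z1, z2, weight=1.0):
    """Toy two-term objective: pixel restoration + feature comparison.

    restored, target: reconstructed and original image patches.
    z1, z2: latent features of two augmented views (siamese branch).
    The restoration term pushes pixel-level detail into the encoder;
    the comparison term preserves invariant high-level semantics.
    """
    restoration = np.mean((restored - target) ** 2)
    cos = np.dot(z1, z2) / (np.linalg.norm(z1) * np.linalg.norm(z2))
    comparison = 1.0 - cos          # minimized when the views agree
    return restoration + weight * comparison

rng = np.random.default_rng(0)
img = rng.standard_normal((16, 16))
z = rng.standard_normal(64)
# Perfect restoration + identical view features -> (near-)zero loss.
print(pcrl_style_loss(img, img, z, z))
```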
We present Muse, a text-to-image Transformer model that achieves state-of-the-art image generation performance while being significantly more efficient than diffusion or autoregressive models. Muse is trained on a masked modeling task in discrete token space: given the text embedding extracted from a pre-trained large language model (LLM), Muse is trained to predict randomly masked image tokens. Compared to pixel-space diffusion models, such as Imagen and DALL-E 2, Muse is significantly more efficient due to the use of discrete tokens and requiring fewer sampling iterations; compared to autoregressive models, such as Parti, Muse is more efficient due to the use of parallel decoding. The use of a pre-trained LLM enables fine-grained language understanding, translating to high-fidelity image generation and the understanding of visual concepts such as objects, their spatial relationships, pose, cardinality etc. Our 900M parameter model achieves a new SOTA on CC3M, with an FID score of 6.06. The Muse 3B parameter model achieves an FID of 7.88 on zero-shot COCO evaluation, along with a CLIP score of 0.32. Muse also directly enables a number of image editing applications without the need to fine-tune or invert the model: inpainting, outpainting, and mask-free editing. More results are available at https://muse-model.github.io
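The iterative parallel decoding that makes Muse faster than autoregressive models can be sketched as follows (a toy numpy version with a stand-in "model"; the schedule, confidence rule, and all names are illustrative assumptions, not Muse's exact procedure): start with every image token masked, predict all masked positions at once, keep only the most confident predictions, and re-mask the rest, unmasking a growing fraction per step:

```python
import numpy as np

rng = np.random.default_rng(0)
MASK = -1

def parallel_decode(n_tokens, predict_fn, n_steps=4):
    """Iterative parallel decoding over a flattened token grid.

    predict_fn(tokens) -> (n_tokens, vocab) probabilities for every
    position. Each step decodes all masked tokens simultaneously and
    commits only the most confident ones, unlike autoregressive
    decoding, which commits one token per forward pass.
    """
    tokens = np.full(n_tokens, MASK)
    for step in range(1, n_steps + 1):
        probs = predict_fn(tokens)
        pred = probs.argmax(axis=1)
        conf = probs.max(axis=1)
        conf[tokens != MASK] = np.inf          # committed tokens stay
        n_keep = int(np.ceil(n_tokens * step / n_steps))
        keep = np.argsort(conf)[::-1][:n_keep]
        new = np.full(n_tokens, MASK)
        new[keep] = np.where(tokens[keep] != MASK, tokens[keep], pred[keep])
        tokens = new
    return tokens

# Stand-in "model": fixed random per-position distributions.
table = rng.random((16, 8))
table /= table.sum(axis=1, keepdims=True)
out = parallel_decode(16, lambda t: table)
print((out != MASK).all())  # True: fully decoded in n_steps passes
```

The efficiency claim in the abstract follows directly from this structure: a 16x16 token grid decodes in a fixed small number of forward passes instead of 256 sequential ones.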
Feature selection helps reduce data acquisition costs in ML, but the standard approach is to train models with static feature subsets. Here, we consider the dynamic feature selection (DFS) problem where a model sequentially queries features based on the presently available information. DFS is often addressed with reinforcement learning (RL), but we explore a simpler approach of greedily selecting features based on their conditional mutual information. This method is theoretically appealing but requires oracle access to the data distribution, so we develop a learning approach based on amortized optimization. The proposed method is shown to recover the greedy policy when trained to optimality and outperforms numerous existing feature selection methods in our experiments, thus validating it as a simple but powerful approach for this problem.
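On discrete toy data, the greedy conditional-mutual-information policy can be computed directly from empirical counts, which makes the "oracle access" requirement concrete (a sketch of the oracle the paper then amortizes with a learned network; the estimator below is naive and only practical for tiny discrete problems):

```python
import numpy as np

def mutual_information(x, y):
    """Empirical I(X; Y) in nats for discrete 1-D arrays."""
    mi = 0.0
    for xv in np.unique(x):
        for yv in np.unique(y):
            pxy = np.mean((x == xv) & (y == yv))
            px, py = np.mean(x == xv), np.mean(y == yv)
            if pxy > 0:
                mi += pxy * np.log(pxy / (px * py))
    return mi

def greedy_cmi_selection(X, y, budget):
    """Greedy dynamic feature selection with an empirical CMI oracle.

    At each step, pick the feature maximizing I(y; x_i | x_S),
    estimated by conditioning on each observed configuration of the
    already-selected features S.
    """
    n, d = X.shape
    selected = []
    for _ in range(budget):
        scores = []
        for i in range(d):
            if i in selected:
                scores.append(-np.inf)
            elif not selected:
                scores.append(mutual_information(X[:, i], y))
            else:
                cmi = 0.0
                keys = [tuple(row) for row in X[:, selected]]
                for key in set(keys):
                    mask = np.array([k == key for k in keys])
                    cmi += mask.mean() * mutual_information(X[mask, i], y[mask])
                scores.append(cmi)
        selected.append(int(np.argmax(scores)))
    return selected

# Toy data: y is a copy of feature 2; features 0 and 1 are noise.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 3))
y = X[:, 2]
print(greedy_cmi_selection(X, y, budget=1))  # [2]
```

The exponential blow-up in configurations of the selected set is exactly why the exact oracle does not scale, motivating the amortized learning approach in the paper.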